An Explicit Quasi-Newton Update for Sparse Optimization Calculations

Authors

  • Angelo Lucia
Abstract

A new quasi-Newton updating formula for sparse optimization calculations is presented. It combines a simple strategy for enforcing symmetry with a Schubert correction to the upper triangle of a permuted Hessian approximation. Two notable properties of the new update are that it is available in closed form and that it does not satisfy the secant condition at every iteration. Numerical results are given which show that the update compares favorably with the sparse PSB update and appears to exhibit a superlinear rate of convergence.
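As background, the Schubert correction mentioned above modifies Broyden's update row by row so that known structural zeros of the Hessian approximation are preserved while the secant equation still holds in each row. The sketch below, in NumPy, illustrates that ingredient only; it is not Lucia's combined symmetrization-and-permutation update, and the dense representation, variable names, and tolerance are our own choices.

```python
import numpy as np

def schubert_update(B, s, y, pattern):
    """One Schubert (sparse Broyden) correction.

    Each row i of the approximation B is corrected only on its known
    sparsity pattern, so the secant equation B_new @ s = y holds in
    every row whose pattern overlaps the step, while structural zeros
    stay exactly zero.

    B       : (n, n) current Hessian approximation
    s       : step, x_{k+1} - x_k
    y       : gradient difference, g_{k+1} - g_k
    pattern : (n, n) boolean mask of allowed nonzeros
    """
    B_new = B.copy()
    for i in range(len(s)):
        s_i = np.where(pattern[i], s, 0.0)   # step restricted to row i's pattern
        denom = s_i @ s_i
        if denom > 1e-12:                    # skip rows whose pattern misses the step
            resid = y[i] - B[i] @ s          # row-i secant residual
            B_new[i] += (resid / denom) * s_i
    return B_new
```

Because s_i @ s equals s_i @ s_i on the masked coordinates, each corrected row reproduces y[i] exactly, which is what makes the row-wise construction attractive for sparse problems.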

Related Articles

A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems

In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
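For reference, the standard secant relation cited here requires the updated approximation to reproduce the observed gradient change along the most recent step:

```latex
B_{k+1} s_k = y_k, \qquad
s_k = x_{k+1} - x_k, \qquad
y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
```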

QuickeNing: A Generic Quasi-Newton Algorithm for Faster Gradient-Based Optimization

We propose an approach to accelerate gradient-based optimization algorithms by giving them the ability to exploit curvature information using quasi-Newton update rules. The proposed scheme, called QuickeNing, is generic and can be applied to a large class of first-order methods such as incremental and block-coordinate algorithms; it is also compatible with composite objectives, meaning that it ...
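The abstract is truncated before the construction is named; to the best of our reading, QuickeNing builds on the Moreau envelope of the objective, which we state here as background (the smoothing parameter κ and notation are ours):

```latex
F(x) = \min_{z \in \mathbb{R}^n}
       \Big\{ f(z) + \tfrac{\kappa}{2}\,\lVert z - x \rVert_2^2 \Big\},
\qquad
\nabla F(x) = \kappa \big( x - z^{*}(x) \big),
```

where z*(x) is the inner minimizer. F is smooth and shares its minimizers with f, so quasi-Newton rules can be applied to F while the wrapped first-order method solves the inner problem approximately.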

A Projection Operator Approach to the Optimization of Trajectory Functionals

We develop a Newton method for the optimization of trajectory functionals. Through the use of a trajectory-tracking nonlinear projection operator, the dynamically constrained optimization problem is converted into an unconstrained one, making many aspects of the algorithm rather transparent. For example, first- and second-order optimality conditions, search direction, and step length calculation...
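In symbols, and as a hedged reconstruction from the abstract (the feedback law and curve notation are our assumptions): the projection operator P maps a curve ξ = (α, μ) to the trajectory η = (x, u) produced by tracking feedback, so that optimization over the trajectory manifold T becomes unconstrained optimization over curves:

```latex
\dot{x} = f(x, u), \qquad
u = \mu + K(t)\,(\alpha - x),
\qquad
\min_{\eta \in \mathcal{T}} h(\eta)
\;\longrightarrow\;
\min_{\xi} h\big(\mathcal{P}(\xi)\big).
```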

Newton-Based Optimization for Nonnegative Tensor Factorizations

Tensor factorizations with nonnegative constraints have found application in analyzing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g., count data), which leads to sparse tensors that can be modeled by sparse factor matrices. In this paper we investigate efficient techniques for computing an app...
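For orientation, the Poisson model referred to here is usually fit by minimizing the negative log-likelihood of the observed counts x over a nonnegative low-rank tensor model M; the Kruskal-operator factor form below is our assumption:

```latex
\min_{A^{(1)}, \dots, A^{(N)} \ge 0} \;
\sum_{i} \big( m_i - x_i \log m_i \big),
\qquad
\mathcal{M} = [\![ A^{(1)}, \dots, A^{(N)} ]\!],
```

where the m_i are the entries of M. Zero counts contribute only through m_i, which is what lets sparse count data produce sparse factor matrices.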

A Modified Orthant-Wise Limited Memory Quasi-Newton Method with Convergence Analysis

The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method has been demonstrated to be very effective in solving the l1-regularized sparse learning problem. OWL-QN extends L-BFGS from solving unconstrained smooth optimization problems to l1-regularized (non-smooth) sparse learning problems. At each iteration, OWL-QN does not involve any l1-regularized quadratic optimization subproblem and on...
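A minimal sketch of the orthant-wise idea, assuming the standard pseudo-gradient of F(x) = f(x) + lam * ||x||_1 (the function name and interface are ours): where the l1 term is non-differentiable, OWL-QN steers with the one-sided derivative that points downhill.

```python
import numpy as np

def pseudo_gradient(x, grad_f, lam):
    """Pseudo-gradient of F(x) = f(x) + lam * ||x||_1.

    Used by OWL-QN in place of the gradient at coordinates where the
    l1 term is non-differentiable (i.e., where x[i] == 0).
    """
    pg = np.zeros_like(x)
    for i in range(len(x)):
        if x[i] > 0.0:
            pg[i] = grad_f[i] + lam          # |x| has slope +1 on the right
        elif x[i] < 0.0:
            pg[i] = grad_f[i] - lam          # |x| has slope -1 on the left
        else:
            if grad_f[i] + lam < 0.0:
                pg[i] = grad_f[i] + lam      # stepping positive decreases F
            elif grad_f[i] - lam > 0.0:
                pg[i] = grad_f[i] - lam      # stepping negative decreases F
            # otherwise 0 is already optimal in this coordinate
    return pg
```

The search direction is then formed from L-BFGS curvature pairs applied to this pseudo-gradient, with the step confined to the orthant selected at the current iterate.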

Journal:

Volume   Issue

Pages   -

Publication date: 2010